Interconnect Performance Evaluation of SGI Altix 3700, Cray X1, Cray Opteron, and Dell PowerEdge
Authors
Abstract
We study the performance of inter-process communication on four high-speed multiprocessor systems using a set of communication benchmarks. The goal is to identify limiting factors and bottlenecks in the interconnects of these systems and to compare the interconnects with one another. We used several benchmarks to examine network behavior under different communication patterns and numbers of communicating processors, measuring network bandwidth with point-to-point communication, collective communication, and dense communication patterns. The four platforms are: a 512-processor SGI Altix 3700 shared-memory machine using 1.6 GHz Itanium-2 processors, interconnected by an SGI NUMAlink-4 switch with 3.2 GB/s bandwidth per node; a 64-processor (single-streaming) Cray X1 shared-memory machine using 800 MHz processors, with 16 processors per node and 32 full-duplex 1.6 GB/s links; a 128-processor Cray Opteron cluster using 2 GHz AMD Opteron processors, interconnected by a Myrinet network; and a 512-processor Dell PowerEdge cluster with 3.6 GHz Intel Xeon processors, interconnected by an InfiniBand network. Our results show the impact of network bandwidth and topology on the overall performance of each interconnect. Several network limitations are identified and analyzed.
Similar resources
Performance Comparison of Cray X1 and Cray Opteron Cluster with Other Leading Platforms using HPCC and IMB Benchmarks
The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem, and interconnect fabric of six leading supercomputers: SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, NEC SX-8, Cray XT3 and IBM Blue Gene/L. These systems also use six different networks (SGI NUMALINK4, Cray net...
Performance Comparison of Cray X1 and Cray Opteron Cluster with Other Leading Platforms using HPCC and IMB Benchmarks
The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem, and interconnect fabric of six leading supercomputers: SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, NEC SX-8 and IBM Blue Gene/L. These systems also use six different networks (SGI NUMALINK4, Cray network, Myri...
A Scalability Study of Columbia using the NAS Parallel Benchmarks
The Columbia system at the NASA Advanced Supercomputing (NAS) facility is a cluster of 20 SGI Altix nodes, each with 512 Itanium 2 processors and 1 terabyte (TB) of shared-access memory. Four of the nodes are organized as a 2048-processor capability-computing platform connected by two low-latency interconnects, NUMALink4 (NL4) and InfiniBand (IB). To evaluate the scalability of Columbia with re...
NAS Experience with the Cray X1
A Cray X1 computer system was installed at the NASA Advanced Supercomputing (NAS) facility at NASA Ames Research Center in 2004. An evaluation study of this unique high performance computing (HPC) architecture, from the standpoints of processor and system performance, ease of use, and production computing readiness tailored to the needs of the NAS scientific community, was recently completed. T...
Massively parallel quantum computer simulator
We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700, and clusters of PCs running Windows XP. We study the performance of the so...
Journal:
Volume / Issue:
Pages: -
Publication date: 2006